artificial general intelligence


A social network for AI looks disturbing, but it's not what you think

New Scientist

A social network solely for AI - no humans allowed - has made headlines around the world. Chatbots are using it to discuss humans' diary entries, describe existential crises or even plot world domination. It looks like an alarming development in the rise of the machines - but all is not as it seems. Like all chatbots, the AI agents on Moltbook are just creating statistically plausible strings of words - there is no understanding, intent or intelligence. And in any case, there's plenty of evidence that much of what we can read on the site is actually written by humans.


HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face

Neural Information Processing Systems

Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence. While there are numerous AI models available for various domains and modalities, they cannot handle complicated AI tasks autonomously. Considering large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks, with language serving as a generic interface to empower this. Based on this philosophy, we present HuggingGPT, an LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and abundant AI models in Hugging Face, HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards the realization of artificial general intelligence.
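The four-stage workflow the abstract describes (task planning, model selection, task execution, response generation) can be sketched as follows. All class and method names here are illustrative assumptions for demonstration, not HuggingGPT's actual API; the stubs stand in for the controller LLM and the Hugging Face model registry.

```python
class StubLLM:
    """Stands in for the controller LLM (ChatGPT in the paper)."""

    def plan(self, request):
        # Stage 1. Task planning: decompose the request into typed subtasks.
        return [{"task": "image-to-text", "args": {"image": "photo.png"}}]

    def summarize(self, request, results):
        # Stage 4. Response generation: fold execution results into one answer.
        return f"For '{request}': " + "; ".join(results)


class StubRegistry:
    """Stands in for model lookup over Hugging Face function descriptions."""

    def select(self, task_type):
        # Stage 2. Model selection: pick an expert model for the subtask type.
        return lambda task: f"{task_type} handled by selected model"


def hugginggpt_pipeline(request, llm, registry):
    subtasks = llm.plan(request)                                  # stage 1
    results = [registry.select(t["task"])(t) for t in subtasks]   # stages 2-3
    return llm.summarize(request, results)                        # stage 4


print(hugginggpt_pipeline("describe this photo", StubLLM(), StubRegistry()))
```

The design point is that the LLM never runs the expert models itself; it only plans, routes, and summarizes, with language as the interface between stages.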


What will your life look like in 2035?

The Guardian

When AIs become consistently more capable than humans, life could change in strange ways. It could happen in the next few years, or a little longer. If and when it comes, our domestic routines - trips to the doctor, farming, work and justice systems - could all look very different. The 'AI' doctor will see you now. In 2035, AIs are more than co-pilots in medicine; they have become the frontline for much primary care.


On the Computability of Artificial General Intelligence

Mappouras, Georgios, Rossides, Charalambos

arXiv.org Artificial Intelligence

In recent years we have observed rapid and significant advancements in artificial intelligence (A.I.). So much so that many wonder how close humanity is to developing an A.I. model that can achieve a human level of intelligence, also known as artificial general intelligence (A.G.I.). In this work we examine this question and attempt to define the upper bounds, not just of A.I., but of any machine-computable process (a.k.a. an algorithm). To answer this question, however, one must first precisely define A.G.I. We borrow a prior work's definition of A.G.I. [1] that best describes the sentiment of the term as used by the leading developers of A.I.: the ability to be creative and innovate in some field of study in a way that unlocks new and previously unknown functional capabilities in that field. Based on this definition we draw new bounds on the limits of computation. We formally prove that no algorithm can demonstrate new functional capabilities that were not already present in the initial algorithm itself. Therefore, no algorithm (and thus no A.I. model) can be truly creative in any field of study, whether that is science, engineering, art, sports, etc. In contrast, A.I. models can demonstrate existing functional capabilities, as well as combinations and permutations of existing functional capabilities. We conclude this work by discussing the implications of this proof, both as regards the future of A.I. development and what it means for the origins of human intelligence.


Will Humanity Be Rendered Obsolete by AI?

Louadi, Mohamed El, Romdhane, Emna Ben

arXiv.org Artificial Intelligence

This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on Irving J. Good and Nick Bostrom's theoretical work, plus recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. Considering machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence vastly exceeding humanity's, fundamentally alien. Human extinction may result not from malice, but from uncontrollable, indifferent cognitive superiority.


There Is Only One AI Company. Welcome to the Blob

WIRED

As Nvidia, OpenAI, Google, and Microsoft forge partnerships and deals, the AI industry is looking more like one interconnected machine. What does that mean for all of us? It all began, as many things do, with Elon Musk. In the early 2010s he realized that AI was on track to become perhaps the most powerful technology of all time.


The Man Who Invented AGI

WIRED

Everyone is obsessed with artificial general intelligence--the stage when AI can match all feats of human cognition. The guy who named it saw it as a threat. In the summer of 1956, a group of academics--now we'd call them computer scientists, but there was no such thing then--met on the Dartmouth College campus in New Hampshire to discuss how to make machines think like humans. One of them, John McCarthy, coined the term "artificial intelligence." This legendary meeting, and the naming of a new field, are well known.


Improving AGI Evaluation: A Data Science Perspective

Hawkins, John

arXiv.org Artificial Intelligence

Evaluation of potential AGI systems and methods is difficult due to the breadth of the engineering goal. We have no methods for perfect evaluation of the end state, and instead measure performance on small tests designed to provide a directional indication that we are approaching AGI. In this work we argue that AGI evaluation methods have been dominated by a design philosophy that uses our intuitions about what intelligence is to create synthetic tasks, an approach that has performed poorly in the history of AI. Instead we argue for an alternative design philosophy focused on evaluating robust task execution, which seeks to demonstrate AGI through competence. This perspective is developed from common practices in data science that are used to show that a system can be reliably deployed. We provide practical examples of what this would mean for AGI evaluation.


Limitations on Safe, Trusted, Artificial General Intelligence

Panigrahy, Rina, Sharan, Vatsal

arXiv.org Artificial Intelligence

Safety, trust and Artificial General Intelligence (AGI) are aspirational goals in artificial intelligence (AI) systems, and there are several informal interpretations of these notions. In this paper, we propose strict, mathematical definitions of safety, trust, and AGI, and demonstrate a fundamental incompatibility between them. We define safety of a system as the property that it never makes any false claims, trust as the assumption that the system is safe, and AGI as the property of an AI system always matching or exceeding human capability. Our core finding is that -- for our formal definitions of these notions -- a safe and trusted AI system cannot be an AGI system: for such a safe, trusted system there are task instances which are easily and provably solvable by a human but not by the system. We note that we consider strict mathematical definitions of safety and trust, and it is possible for real-world deployments to instead rely on alternate, practical interpretations of these notions. We show our results for program verification, planning, and graph reachability. Our proofs draw parallels to Gödel's incompleteness theorems and Turing's proof of the undecidability of the halting problem, and can be regarded as interpretations of Gödel's and Turing's results.
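The diagonal construction behind Turing's undecidability proof, which the abstract says its arguments parallel, can be shown in a few lines. This is a generic illustration of that classical argument, not the paper's own proof: any total procedure that claims to answer every halting question is defeated by a program that asks the procedure about itself and does the opposite.

```python
def make_diagonal(claims_halts):
    """Build a program g that defeats the purported halting decider.

    `claims_halts(f)` is assumed to always return True ("f halts")
    or False ("f loops") -- i.e. it answers every instance.
    """
    def g():
        if claims_halts(g):
            while True:      # decider said "halts", so loop forever
                pass
        return None          # decider said "loops", so halt immediately
    return g


# A decider that answers "loops" for everything is wrong about g,
# because g then halts immediately:
g = make_diagonal(lambda f: False)
print(g())  # halts and returns None, contradicting the "loops" claim
```

Whatever the decider answers about `g`, the answer is a false claim. So a system that answers every instance cannot be "safe" in the paper's sense (never making a false claim), while a system that never makes a false claim must leave some humanly-solvable instances unanswered, which is the shape of the incompatibility the authors formalize.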


Why One VC Thinks Quantum Is a Bigger Unlock Than AGI

WIRED

Venture capitalist Alexa von Tobel is ready to bet on quantum computing--starting with hardware. Alexa von Tobel is fully aware that her big bet on quantum computing may never pay off. "The risk of being too early is a real risk," she says. She's speaking to me via Zoom from the New York City office of Inspired Capital, the early-stage venture capital firm she runs with former US commerce secretary Penny Pritzker. In addition to personally investing in blue-chip brands like Uber and Airtable, von Tobel has backed a number of AI startups through Inspired Capital, including BrightAI (a platform that monitors critical infrastructure) and PreemptiveAI (a startup building a foundation model to map human physiology and predict health outcomes).